
    The quality of experience of emerging display technologies

    As new display technologies emerge and become part of everyday life, understanding the visual experience they provide becomes more relevant. Perception is the most vital cognitive component of visual experience; however, it is not the only cognitive process that contributes to the complex overall experience of the end user. Expectations can create significant cognitive bias that may even override what the user genuinely perceives. Even if a visualization technology is somewhat novel, expectations can be fuelled by prior experiences gained from using similar displays and, more importantly, even a single word or an acronym may induce serious preconceptions, especially if such a word suggests excellence in quality. In this interdisciplinary Ph.D. thesis, the effect of minimal, one-word labels on the Quality of Experience (QoE) is investigated in a series of subjective tests. In the studies carried out on an ultra-high-definition (UHD) display, UHD video contents were directly compared to their HD counterparts, with and without labels explicitly informing the test participants about the resolution of each stimulus. The experiments on High Dynamic Range (HDR) visualization addressed the effect of the word “premium” on the quality aspects of HDR video, and also how this may affect the perceived duration of stalling events. To support these findings, additional tests were carried out comparing the stalling detection thresholds of HDR video with those of conventional Low Dynamic Range (LDR) video. The third emerging technology addressed by this thesis is light field visualization. Due to its novel nature and the lack of comprehensive, exhaustive research on the QoE of light field displays and content parameters at the time of this thesis, four phases of subjective studies were performed on light field QoE instead of investigating the labeling effect. The first phase started with fundamental research, and the experiments progressed towards the concept and evaluation of the dynamic adaptive streaming of light field video, introduced in the final phase.

    How I met your V2X sensor data: analysis of projection-based light field visualization for vehicle-to-everything communication protocols and use cases

    The practical usage of V2X communication protocols has started to emerge in recent years. Data built on sensor information are displayed via onboard units and smart devices. However, visually attending to such data may be counterproductive in terms of visual attention, particularly in the case of safety-related applications. Using the windshield as a display may solve this issue, but switching between 2D information and the 3D reality of traffic may introduce issues of its own. To overcome such difficulties, automotive light field visualization is introduced. In this paper, we investigate the visualization of V2X communication protocols and use cases via projection-based light field technology. Our work is motivated by the abundance of V2X sensor data, the low latency of V2X data transfer, the availability of automotive light field prototypes, the prevalent dominance of non-autonomous and non-remote driving, and the lack of V2X-based light field solutions. As our primary contributions, we provide a comprehensive technological review of light field and V2X communication, a set of recommendations for design and implementation, an extensive discussion and implication analysis, the exploration of utilization based on standardized protocols, and use-case-specific considerations.

    Through a different lens: the perceived quality of light field visualization assessed by test participants with imperfect visual acuity and color blindness

    With the emergence of commercially available light field displays, both industry and academia have begun researching potential use cases for future society. However, while there is an unfortunate global trend that eyesight-related issues are becoming more common among new generations, such individuals are underrepresented in light field research. In this paper, we present the results of a series of subjective tests carried out on light field displays, exclusively with test participants who would otherwise not qualify to assess visualization quality in a regular study. The investigated topics include spatial resolution, angular resolution, and viewing distance.

    Towards reconstructing HDR light fields by combining 2D and 3D CNN architectures

    High dynamic range (HDR) imaging has recently become a technological trend, with numerous attempts to reconstruct HDR images and videos from low-dynamic-range data. The reconstruction of light field images is analogous to the reconstruction of HDR videos, within which consecutive frames are temporally coherent. For light field images, many similarities exist between adjacent views, since they visualize the same scene from different angular perspectives. In this paper, we investigate the theoretical possibilities of combining CNN architectures utilized for HDR images and videos in order to enhance the outputs of HDR light field image reconstruction.
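As a minimal illustration of the analogy drawn in the abstract above (not the architecture investigated in the paper), adjacent light field views can be stacked along a pseudo-temporal axis so that a video-oriented 3D convolution operates across views exactly as it would across frames. numpy stands in for an actual CNN framework here, and the averaging kernel is purely illustrative:

```python
import numpy as np

def stack_views_as_video(views):
    """Stack adjacent light field views (list of HxW arrays) along a new
    leading axis, treating the angular dimension like the temporal axis
    of a video clip: the result has shape (V, H, W)."""
    return np.stack(views, axis=0)

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (cross-correlation) of a (V, H, W)
    volume with a (kv, kh, kw) kernel; stands in for one 3D CNN layer."""
    kv, kh, kw = kernel.shape
    out_shape = (volume.shape[0] - kv + 1,
                 volume.shape[1] - kh + 1,
                 volume.shape[2] - kw + 1)
    out = np.empty(out_shape)
    for v in range(out_shape[0]):
        for y in range(out_shape[1]):
            for x in range(out_shape[2]):
                out[v, y, x] = np.sum(volume[v:v+kv, y:y+kh, x:x+kw] * kernel)
    return out

# Three 4x4 views of "the same scene" from slightly shifted perspectives.
views = [np.full((4, 4), i, dtype=float) for i in range(3)]
volume = stack_views_as_video(views)       # shape (3, 4, 4)
kernel = np.ones((3, 3, 3)) / 27.0         # averaging filter across views
print(conv3d_valid(volume, kernel).shape)  # (1, 2, 2)
```

Because adjacent views are highly correlated, a filter spanning the view axis can aggregate information across them, which is the same property a video-oriented 3D CNN exploits across consecutive frames.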

    Canonical 3D object orientation for interactive light-field visualization

    Light-field visualization allows users to freely choose a preferred location for observation within the display’s valid field of view. As this 3D visualization technology offers continuous motion parallax, the user’s location determines the perceived orientation of the visualized content, assuming static objects and scenes. In the case of interactive light-field visualization, the arbitrary rotation of content enables efficient orientation changes without the need for actual user movement. However, the preference of content orientation is a subjective matter, yet it can also be objectively managed and assessed. In this paper, we present a series of subjective tests carried out on a real light-field display, addressing static content orientation preference. State-of-the-art objective methodologies were used to evaluate the experimental setup and the content. We used the subjective results to develop our own objective metric for canonical orientation selection.
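The metric developed in the paper above is not reproduced here; as a toy sketch only, a canonical orientation could be selected from per-angle subjective preference scores by circular smoothing followed by an argmax. The angles, scores, and window size below are all hypothetical values chosen for illustration:

```python
import numpy as np

def canonical_orientation(angles_deg, scores, window=3):
    """Toy canonical-orientation selector (hypothetical, not the paper's
    metric): circularly smooth per-angle subjective preference scores
    with a moving average, then return the best-scoring angle."""
    scores = np.asarray(scores, dtype=float)
    kernel = np.ones(window) / window
    # Orientation is periodic, so pad by wrapping before smoothing.
    half = window // 2
    padded = np.concatenate([scores[-half:], scores, scores[:half]])
    smoothed = np.convolve(padded, kernel, mode="valid")
    return angles_deg[int(np.argmax(smoothed))]

angles = list(range(0, 360, 45))                     # 8 candidate yaw orientations
scores = [2.1, 2.4, 3.4, 4.6, 4.0, 3.2, 2.5, 2.0]    # hypothetical mean opinion scores
print(canonical_orientation(angles, scores))         # 135
```

Smoothing rewards orientations whose neighbors also score well, so an isolated high score does not automatically win over a consistently preferred region of the viewing circle.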

    Towards Euclidean auto-calibration of stereo camera arrays

    Multi-camera networks are becoming ubiquitous in a variety of applications related to medical imaging, education, entertainment, autonomous vehicles, civil security, defense, etc. The foremost task in deploying a multi-camera network is camera calibration, which usually involves introducing an object with known geometry into the scene. However, most of the aforementioned applications necessitate non-intrusive, automatic camera calibration. To this end, a class of camera auto-calibration methods imposes constraints on the camera network rather than on the scene. In particular, the inclusion of stereo cameras in a multi-camera network is known to improve calibration accuracy and preserve scale. Yet most of the methods relying on stereo cameras use custom-made stereo pairs, and such stereo pairs are inherently imperfect: while the baseline distance can be fixed, one cannot guarantee that the optical axes of the two cameras are parallel. In this paper, we propose a characterization of the imperfections in such stereo pairs, under the assumption that these imperfections fall within a small, reasonable deviation range from the ideal values. Once the imperfections are quantified, we use an auto-calibration method to calibrate a set of stereo cameras. We compare these results with those obtained under the parallel-optical-axes assumption. The paper also reports results obtained from the utilization of synthetic visual data.
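As a minimal sketch of the kind of imperfection characterized above (the paper's own method is not reproduced here), a small yaw offset between the two cameras of a stereo pair can be modeled as a rotation, and the deviation between the nominally parallel optical axes recovered as an angle; all numeric values are illustrative:

```python
import numpy as np

def rotation_about_y(theta):
    """Rotation matrix about the y-axis (yaw), angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def axis_deviation_deg(yaw_deg):
    """Angle between the two optical axes of a stereo pair whose right
    camera is rotated by a small yaw relative to the left camera."""
    z = np.array([0.0, 0.0, 1.0])                     # ideal optical axis
    z_right = rotation_about_y(np.radians(yaw_deg)) @ z
    cos_angle = np.clip(z @ z_right, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# An ideal pair has parallel axes (0 degrees of deviation); an imperfect
# pair with a 0.5-degree yaw offset deviates by exactly that half degree.
print(round(axis_deviation_deg(0.0), 6))  # 0.0
print(round(axis_deviation_deg(0.5), 6))  # 0.5
```

A full characterization would also need pitch and roll offsets (three small rotations rather than one), but the single-axis case already shows how a quantified deviation can replace the parallel-axes idealization.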